The wide range of digital payment options available to consumers today has been a key driver of e-commerce transactions over the past decade. Unfortunately, it has also given rise to cybercriminals and fraudsters who continuously look for vulnerabilities in these systems by deploying increasingly sophisticated fraud attacks. A typical fraud detection system employs standard supervised learning methods focused on maximizing fraud recall. However, we argue that this formulation can lead to suboptimal solutions. The design requirements for these fraud models demand that they be robust to the high class imbalance in the data, adaptive to changes in fraud patterns, able to maintain a balance between the fraud rate and the decline rate to maximize revenue, and amenable to asynchronous feedback, since there is usually a significant lag between a transaction and the identification of fraud. To achieve this, we formulate fraud detection as a sequential decision-making problem, incorporating utility maximization into the model through the reward function. The historical decline rate and fraud rate define the state of the system, with a binary action space consisting of approving or declining a transaction. In this study, we focus primarily on utility maximization and explore different reward functions for this purpose. The proposed approach is evaluated with Deep Q-Learning on two publicly available fraud datasets and compared against different classifiers. We aim to address the remaining questions in future work.
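As an illustration of the formulation described above, the sketch below shows how the state and a reward of this kind might be set up in Python. The specific utility terms (transaction amount as revenue, fraud loss, a fixed decline penalty) are assumptions made for the sake of the example, not the reward functions actually explored in the paper.

```python
import numpy as np

APPROVE, DECLINE = 0, 1  # binary action space: approve or decline a transaction

def reward(action, is_fraud, amount, decline_penalty=1.0):
    """Illustrative utility: revenue for approved legitimate transactions,
    a loss for approved fraud, and a small opportunity cost for declines."""
    if action == APPROVE:
        return amount if not is_fraud else -amount
    return -decline_penalty

def state(history):
    """State as described in the abstract: historical decline rate and fraud rate.
    `history` is a list of (action, is_fraud) pairs for past transactions."""
    if not history:
        return np.zeros(2)
    decline_rate = np.mean([a == DECLINE for a, _ in history])
    fraud_rate = np.mean([f for _, f in history])
    return np.array([decline_rate, fraud_rate])
```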
Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements with just a single context-free sentence as input. This does not match language models' training regime, in which input sentences are always highly contextualized by the surrounding corpus. This mismatch raises an important question: how robust are models' syntactic judgements in different contexts? In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality. We find that model judgements are generally robust when placed in randomly sampled linguistic contexts. However, they are substantially unstable for contexts containing syntactic structures matching those in the critical test content. Across all tested models (GPT-2 and five variants of OPT), we can significantly improve models' judgements by providing contexts with matching syntactic structures, and conversely significantly worsen them using unacceptable contexts with matching but violated syntactic structures. This effect is amplified by the length of the context, except for unrelated inputs. We show that these changes in model performance are not explainable by simple features matching the context and the test inputs, such as lexical overlap and dependency overlap. This sensitivity to highly specific syntactic features of the context can only be explained by the models' implicit in-context learning abilities.
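As a rough sketch of how such a context-conditioned minimal-pair judgement can be computed in practice (this is not the paper's evaluation code; the model name, context, and sentence pair below are illustrative), one can compare the log-probability a causal language model assigns to the acceptable and unacceptable test sentences given the same context prefix:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(context: str, sentence: str) -> float:
    """Sum of token log-probabilities of `sentence`, conditioned on `context`."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    full_ids = tok(context + " " + sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.size(0)), targets]
    return token_lp[ctx_ids.size(1) - 1 :].sum().item()  # score only the test sentence

context = "The authors that the critic praises write novels."  # context with matching structure
acceptable = "The keys that the man holds are on the table."
unacceptable = "The keys that the man holds is on the table."
print(sentence_logprob(context, acceptable) > sentence_logprob(context, unacceptable))
```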
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
The widespread presence of offensive content online, such as hate speech and cyberbullying, is a global phenomenon. This has sparked interest in the artificial intelligence (AI) and natural language processing (NLP) communities, motivating the development of various systems trained to detect potentially harmful content automatically. These systems require annotated datasets to train the machine learning (ML) models. However, with a few notable exceptions, most datasets on this topic have dealt with English and a few other high-resource languages. As a result, research in offensive language identification has been limited to these languages. This paper addresses this gap by tackling offensive language identification in Sinhala, a low-resource Indo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter, annotated as offensive or not offensive at both the sentence level and the token level, improving the explainability of the ML models. SOLD is the first large publicly available offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
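Purely as an illustration of what combined sentence-level and token-level annotation provides (a hypothetical record layout with placeholder tokens, not SOLD's actual schema or label names), a single annotated post might look like this:

```python
# Hypothetical record layout, not SOLD's actual schema: one tweet annotated
# both with an overall label and with per-token labels marking offensive spans.
example_record = {
    "post_id": "12345",
    "tokens": ["token_1", "token_2", "token_3", "token_4"],  # tokenized tweet text
    "sentence_label": "offensive",                            # sentence-level label
    "token_labels": [0, 0, 1, 1],  # 1 marks tokens that make the post offensive
}
```

The token-level labels are what support explainability: they indicate which parts of the post drive the sentence-level decision.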
Large language models (LLMs) have unlocked new capabilities for task planning from human instructions. However, prior attempts to apply LLMs to real-world robotic tasks have been limited by the lack of grounding in the surrounding scene. In this paper, we develop NLMap, an open-vocabulary, queryable scene representation that addresses this problem. NLMap is a framework for gathering contextual information into an LLM planner, allowing it to see and query the objects available in the scene before generating a context-conditioned plan. NLMap first builds a natural-language-queryable scene representation with a vision-language model (VLM). An LLM-based object proposal module parses the instruction and proposes the objects involved, which are used to query the scene representation for object availability and location. The LLM planner then plans with this information about the scene. NLMap allows robots to operate without a fixed list of objects or executable options, enabling real-robot operation that was not possible with previous methods. Project website: https://nlmap-saycan.github.io
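A structural sketch of this pipeline is given below. The function and parameter names are hypothetical placeholders for the components the abstract describes (the VLM-backed scene representation, the LLM object-proposal module, and the LLM planner); they are not the actual NLMap API.

```python
from typing import Callable

def nlmap_plan(
    instruction: str,
    scene_query: Callable[[str], dict],      # VLM-backed queryable scene representation
    propose_objects: Callable[[str], list],  # LLM object-proposal module
    plan: Callable[[str, dict], list],       # LLM planner
) -> list:
    # Parse the instruction into the objects it involves, e.g. ["sponge", "sink"].
    candidates = propose_objects(instruction)
    # Query the scene representation for each object's availability and location.
    grounding = {obj: scene_query(obj) for obj in candidates}
    # Generate a plan conditioned on the instruction and the scene information.
    return plan(instruction, grounding)
```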
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful for robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean up a spill may yield a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding through pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for executing complex and temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we demonstrate the need for real-world grounding and show that this approach is capable of completing long-horizon, abstract natural language instructions on a mobile manipulator. The project's website and videos can be found at https://say-can.github.io/
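As a minimal sketch of this idea (with illustrative placeholder names rather than the actual implementation), each candidate skill can be scored both by the language model, reflecting how useful the skill is for the instruction, and by its value function, reflecting how likely the skill is to succeed in the current state; combining the two selects the next skill:

```python
import math

def select_skill(instruction, history, skills, llm_logprob, value_fn):
    """Pick the skill maximizing (LLM score) x (affordance from the value function).
    `llm_logprob` returns the log-probability of a skill's text given the
    instruction and the skills executed so far; `value_fn` estimates the
    skill's success probability from the current observation."""
    best_skill, best_score = None, -math.inf
    for skill in skills:
        language_score = math.exp(llm_logprob(instruction, history, skill))
        affordance = value_fn(skill)           # grounded in the physical environment
        score = language_score * affordance
        if score > best_score:
            best_skill, best_score = skill, score
    return best_skill
```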
We provide a free and open-source tool for creating web-based surveys that include text annotation tasks. Existing tools offer either text annotation or survey functionality, but not both. Combining the two input types is particularly relevant for surveying readers' perceptions of a text, which also depend on the reader's background, such as age, gender, and education. Our tool primarily caters to the needs of researchers in library and information science, the social sciences, and the humanities who conduct content analyses via surveys, for example on media bias, political communication, or fake news.
My PhD research focuses on understanding the semantic knowledge encoded in neural network models trained to predict natural language (known as language models, or LMs), drawing on insights from the study of concepts and categories in cognitive science. I propose a framework inspired by "inductive reasoning," a phenomenon that sheds light on how humans leverage background knowledge to make inductive leaps and generalize from new information about concepts and their properties. Drawing from experiments that study inductive reasoning, I propose to analyze semantic inductive generalization in LMs using phenomena observed in the human induction literature, to investigate inductive behavior on tasks such as implicit reasoning and emergent feature recognition, and to analyze and relate induction dynamics to the learned conceptual representation space.